Add Benchmark concepts of service time and latency #5916
Conversation
Signed-off-by: Naarcha-AWS <[email protected]>
left some comments
@IanHoang: This is ready for your review again.
Looks good, just a few suggestions.
Co-authored-by: Heather Halter <[email protected]> Signed-off-by: Naarcha-AWS <[email protected]>
images/benchmark/service-time.png (Outdated)
Hello @Naarcha-AWS, I believe the service time measured in OSB is just the time taken from "request reached server" to "response provided by server". Please correct me if I am wrong. Thanks :)
@IanHoang: What do you think? I can adjust the wording if needed.
@Naarcha-AWS Please see my comments and changes and let me know if you have any questions. Thanks!
_benchmark/user-guide/concepts.md (Outdated)
- `search_clients` set to 1
- `target-throughput` set to 10 operations per second

The following diagram shows the schedule built by OSB with the expected response time.
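As a hypothetical sketch of that schedule (illustrative code, not OSB's actual scheduler; `build_schedule` is an invented helper), a fixed target throughput of 10 operations per second with a single client means one request is issued every 0.1 seconds:

```python
def build_schedule(target_throughput: float, num_requests: int) -> list[float]:
    """Return the issue times (seconds from the start of the benchmark)
    for a fixed-throughput schedule: requests are spaced evenly at
    1 / target_throughput, regardless of how long responses take."""
    interval = 1.0 / target_throughput
    return [round(i * interval, 6) for i in range(num_requests)]

# With target-throughput set to 10 and search_clients set to 1,
# the client issues a request every 100 ms.
schedule = build_schedule(target_throughput=10, num_requests=5)
print(schedule)  # [0.0, 0.1, 0.2, 0.3, 0.4]
```

If responses come back faster than the interval, the client simply waits until the next scheduled slot; if they come back slower, requests begin to queue, which is where latency and service time diverge.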
Should "time" be "times"?
Co-authored-by: Nathan Bower <[email protected]> Signed-off-by: Naarcha-AWS <[email protected]>
* Add Benchmark concepts of service time and latency
* Fix typo
* Add table, fix typos
* A few more small tweaks
* Apply suggestions from code review
* Update concepts.md
* Update concepts.md
* Update concepts.md
* Apply suggestions from code review (Co-authored-by: Heather Halter <[email protected]>)
* Apply suggestions from code review (Co-authored-by: Nathan Bower <[email protected]>)

(cherry picked from commit c56b2f6)

Signed-off-by: Naarcha-AWS <[email protected]>
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Heather Halter <[email protected]>
Co-authored-by: Nathan Bower <[email protected]>
| Metric | Common definition | **OpenSearch Benchmark definition** |
| :--- | :--- | :--- |
| **Throughput** | The number of operations completed in a given period of time. | The number of operations completed in a given period of time. |
| **Service time** | The amount of time that the server takes to process a request, from the point it receives the request to the point the response is returned. </br></br> It includes the time spent waiting in server-side queues but _excludes_ network latency, load balancer overhead, and deserialization/serialization. | The amount of time that it takes for `opensearch-py` to send a request and receive a response from the OpenSearch cluster. </br></br> It includes the amount of time that it takes for the server to process a request and also _includes_ network latency, load balancer overhead, and deserialization/serialization. |
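The distinction the table draws matters most when the schedule outpaces the server. A minimal sketch (illustrative only, not OSB's implementation) of how latency, measured from each request's *scheduled* time, grows beyond service time once a slow response makes later requests wait:

```python
def compute_latencies(scheduled: list[float], service_times: list[float]) -> list[float]:
    """Latency per request = service time plus any time the request spent
    waiting because the single client was still busy with earlier requests."""
    latencies = []
    free_at = 0.0  # when the client is next available to send a request
    for sched, svc in zip(scheduled, service_times):
        start = max(sched, free_at)       # may start later than scheduled
        finish = start + svc
        latencies.append(finish - sched)  # measured from the scheduled time
        free_at = finish
    return latencies

# Requests scheduled every 0.1 s; the second takes 0.3 s to serve, so the
# third must wait behind it and its latency exceeds its 0.05 s service time.
lat = compute_latencies([0.0, 0.1, 0.2], [0.05, 0.30, 0.05])
```

Here the third request's latency is 0.25 s even though its service time is only 0.05 s, which is why OSB reports the two metrics separately.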
Correct me if I am wrong, @dblock, @IanHoang.
In OpenSearch Benchmark, service time is calculated in osbenchmark/client.py using the following trace configuration:
trace_config = aiohttp.TraceConfig()
trace_config.on_request_start.append(on_request_start)
trace_config.on_request_end.append(on_request_end)
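OSB's real callbacks live in osbenchmark/client.py; as a rough sketch of how such aiohttp-style trace callbacks bracket the HTTP exchange (the callback bodies here are illustrative, and a plain namespace stands in for the real trace context), the pattern looks like this:

```python
import asyncio
import time
from types import SimpleNamespace

# Callbacks in the shape aiohttp's TraceConfig expects:
# (session, trace_config_ctx, params). Bodies are illustrative.
async def on_request_start(session, trace_config_ctx, params):
    # Stamp the moment the request is about to go on the wire.
    trace_config_ctx.request_start = time.perf_counter()

async def on_request_end(session, trace_config_ctx, params):
    # Stamp the moment the response has been fully received; the
    # difference is what gets reported as service time.
    trace_config_ctx.request_end = time.perf_counter()

async def demo():
    ctx = SimpleNamespace()
    await on_request_start(None, ctx, None)
    await asyncio.sleep(0.01)  # stand-in for the HTTP round trip
    await on_request_end(None, ctx, None)
    return ctx.request_end - ctx.request_start

service_time = asyncio.run(demo())
```

With aiohttp, the callbacks would be appended to a `TraceConfig` exactly as in the snippet above, and the timestamps captured on the trace context exclude any client work done before the request starts or after the response ends.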
I believe the first definition (the common definition) in the documentation table above aligns more accurately with our interpretation of service time.
Service time: the interval from the moment the server receives the request to the moment it sends the response.
Additional Information:
I attempted to measure the timing by implementing a perform_request function within the AIOHttpConnection in osbenchmark/async_connection.py. The times obtained indicate that the calculated service time doesn't include client processing time.
async def perform_request(self, method, url, params=None, body=None, timeout=None, ignore=(), headers=None):
    print("AIOHttpConnection perform_request start time", time.perf_counter())
    status, headers, raw_data = await super().perform_request(
        method=method, url=url, params=params, body=body,
        timeout=timeout, ignore=ignore, headers=self.headers,
    )
    print("AIOHttpConnection perform_request end time", time.perf_counter())
    return status, headers, raw_data
Sample output:
AIOHttpConnection perform_request start time 19.0016295
AIOHttpConnection perform_request end time 19.007145792
service time start 19.001905542
service time end 19.007082167
service time 0.005176625000000712
AIOHttpConnection perform_request start time 19.008452584
AIOHttpConnection perform_request end time 19.012769334
service time start 19.008625875
service time end 19.012713375
service time 0.004087500000000688
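Working through the first request in that output confirms the point: the perform_request wrapper spans about 5.52 ms while the traced service time spans about 5.18 ms, leaving roughly 0.34 ms of client-side processing outside the service-time window. The arithmetic, with the numbers copied from the sample above:

```python
# Timestamps copied from the first request in the sample output.
wrapper_start, wrapper_end = 19.0016295, 19.007145792
service_start, service_end = 19.001905542, 19.007082167

wrapper_span = wrapper_end - wrapper_start      # full perform_request call
service_span = service_end - service_start      # traced service time
client_overhead = wrapper_span - service_span   # time outside the trace window

print(f"wrapper:         {wrapper_span * 1000:.3f} ms")
print(f"service:         {service_span * 1000:.3f} ms")
print(f"client overhead: {client_overhead * 1000:.3f} ms")
```

The traced window starts after the wrapper starts and ends before the wrapper ends, so the reported service time excludes the client's own request/response handling, supporting the comment above.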